# Multi-source data fine-tuning

Whisper Turbo Ksc2
License: MIT
An automatic speech recognition model based on Whisper large-v3-turbo, fine-tuned on approximately 1,000 hours of Kazakh speech data; it achieves a character error rate of 9.16% on the test set.
Tags: Speech Recognition, Transformers, Other
Author: abilmansplus · Downloads: 1,740 · Likes: 1
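A minimal sketch of transcribing Kazakh audio with a Whisper fine-tune like this one through the Hugging Face transformers ASR pipeline. The repository id "abilmansplus/whisper-turbo-ksc2" and the file name are assumptions based on the listing above; substitute the actual id from the model card.

```python
from transformers import pipeline

# Assumed repo id, inferred from the model name and author in the listing.
asr = pipeline(
    "automatic-speech-recognition",
    model="abilmansplus/whisper-turbo-ksc2",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# Any 16 kHz mono audio file works here; "kazakh_sample.wav" is a placeholder.
result = asr("kazakh_sample.wav")
print(result["text"])
```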
Wav2vec2 Xlsr 300m Finnish Lm
License: Apache-2.0
A Finnish automatic speech recognition (ASR) model fine-tuned from Facebook's wav2vec2-xls-r-300m, trained on 275.6 hours of Finnish speech data; it supports decoding with a KenLM language model.
Tags: Speech Recognition, Transformers, Other
Author: aapot · Downloads: 15 · Likes: 0
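A minimal sketch of CTC decoding with a bundled KenLM language model via transformers' Wav2Vec2ProcessorWithLM (which requires pyctcdecode and kenlm to be installed). The repo id "aapot/wav2vec2-xlsr-300m-finnish-lm" is inferred from the listing and may differ from the actual repository.

```python
import torch
import librosa
from transformers import Wav2Vec2ProcessorWithLM, Wav2Vec2ForCTC

model_id = "aapot/wav2vec2-xlsr-300m-finnish-lm"  # assumed repo id
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load audio at the 16 kHz sampling rate the model expects.
speech, sr = librosa.load("finnish_sample.wav", sr=16_000)
inputs = processor(speech, sampling_rate=sr, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# batch_decode on Wav2Vec2ProcessorWithLM runs beam-search decoding
# against the repository's KenLM language model.
transcription = processor.batch_decode(logits.numpy()).text[0]
print(transcription)
```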
Wav2vec2 Large Xlsr Catala
License: Apache-2.0
A Catalan speech recognition model fine-tuned from facebook/wav2vec2-large-xlsr-53, trained on Common Voice and parliamentary speech datasets.
Tags: Speech Recognition, Other
Author: softcatala · Downloads: 64.30k · Likes: 0
Wav2vec2 Xlsr 1b Finnish
License: Apache-2.0
A fine-tuned version of Facebook's wav2vec2-xls-r-1b model for Finnish automatic speech recognition (ASR), trained on 259.57 hours of annotated Finnish speech data.
Tags: Speech Recognition, Transformers, Other
Author: aapot · Downloads: 18 · Likes: 0